
    On the Cryptographic Hardness of Local Search

    We show new hardness results for the class of Polynomial Local Search problems (PLS):
    - Hardness of PLS based on a falsifiable assumption on bilinear groups, introduced by Kalai, Paneth, and Yang (STOC 2019), together with the Exponential Time Hypothesis for randomized algorithms. Previous standard-model constructions relied on non-falsifiable and non-standard assumptions.
    - Hardness of PLS relative to random oracles. The construction is essentially different from previous constructions and, in particular, is unconditionally secure. It also demonstrates the hardness of parallelizing local search.
    The core observation behind the results is that the unique-proofs property of incrementally-verifiable computations, previously used to demonstrate hardness in PLS, can be traded for a simple incremental-completeness property.
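    For context, a minimal sketch of the standard PLS formalism (textbook background, not this paper's contribution; the notation below is illustrative):

```latex
% A PLS instance is given by a pair of polynomial-size circuits
%   S : \{0,1\}^n \to \{0,1\}^n  (a successor, proposing an improving neighbor)
%   V : \{0,1\}^n \to [2^m]      (a value function to be maximized)
% and a solution is any local optimum, i.e.,
\[
  \text{find } x \in \{0,1\}^n \ \text{such that} \ V(S(x)) \le V(x).
\]
```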

    On Oblivious Amplification of Coin-Tossing Protocols

    We consider the problem of amplifying two-party coin-tossing protocols: given a protocol where it is possible to bias the common output by at most δ, we aim to obtain a new protocol where the output can be biased by at most δ* < δ. We rule out the existence of a natural type of amplifiers, called oblivious amplifiers, for every δ* < δ. Such amplifiers ignore the way the underlying δ-bias protocol works and can only invoke an oracle that provides δ-bias bits. We provide two proofs of this impossibility. The first is by a reduction to the impossibility of deterministic randomness extraction from Santha-Vazirani sources. The second is a direct proof that is more general and also rules out certain types of asymmetric amplification. In addition, it gives yet another proof of the Santha-Vazirani impossibility.
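    For context, the Santha-Vazirani impossibility that the first proof reduces to (standard background, not from this paper):

```latex
% A \delta-Santha-Vazirani source emits bits X_1, X_2, \ldots satisfying
%   \Pr[X_i = 1 \mid X_1, \ldots, X_{i-1}] \in [1/2 - \delta,\ 1/2 + \delta].
% Santha and Vazirani (1986) showed that deterministic extraction is
% impossible: for every deterministic Ext : \{0,1\}^n \to \{0,1\} there is a
% \delta-SV source X such that
\[
  \bigl|\Pr[\mathrm{Ext}(X) = 1] - \tfrac{1}{2}\bigr| \;\ge\; \delta .
\]
```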

    Bootstrapping Homomorphic Encryption via Functional Encryption

    Homomorphic encryption is a central object in modern cryptography, with far-reaching applications. Constructions supporting homomorphic evaluation of arbitrary Boolean circuits have been known for over a decade, based on standard lattice assumptions. However, these constructions are leveled, meaning that they only support circuits up to some a-priori bounded depth. These leveled constructions can be bootstrapped into fully homomorphic ones, but this requires additional circular-security assumptions, which are construction-dependent and for which reductions to standard lattice assumptions are no longer known. Alternative constructions are known based on indistinguishability obfuscation, which has recently been constructed under standard assumptions. However, this alternative requires subexponential hardness of the underlying primitives. We prove a new bootstrapping theorem based on functional encryption, which is known based on standard polynomial hardness assumptions. As a result, we obtain the first fully homomorphic encryption scheme that avoids both circular-security assumptions and super-polynomial hardness assumptions. The construction is secure against uniform adversaries, and can be made non-uniformly secure assuming a generalization of the time-hierarchy theorem, which follows, for example, from non-uniform ETH. At the heart of the construction is a new proof technique based on cryptographic puzzles and decomposable obfuscation. Unlike most cryptographic reductions, our security reduction does not fully treat the adversary as a black box, but rather makes explicit use of its running time (or circuit size).
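    For context, a minimal sketch of the classic Gentry-style bootstrapping step, which is where the circular-security assumption this paper avoids comes from (standard background, not the paper's technique):

```latex
% Refresh a noisy ciphertext c encrypted under sk_1 into a fresh ciphertext
% under sk_2 by homomorphically evaluating the decryption circuit on an
% encryption of sk_1:
\[
  c' = \mathrm{Eval}\bigl(pk_2,\ \mathrm{Dec}(\cdot,\, c),\ \mathrm{Enc}(pk_2, sk_1)\bigr),
  \qquad \mathrm{Dec}(sk_2, c') = \mathrm{Dec}(sk_1, c).
\]
% Setting pk_2 = pk_1 removes the a-priori depth bound, but one must then
% assume that publishing Enc(pk_1, sk_1) is safe: the circular-security
% assumption.
```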

    A Note on Perfect Correctness by Derandomization

    In this note, we show how to transform a large class of erroneous cryptographic schemes into perfectly correct ones. The transformation works for schemes that are correct on every input with probability noticeably larger than half, and that are secure under parallel repetition. We assume the existence of one-way functions and of functions with deterministic (uniform) time complexity $2^{O(n)}$ and non-deterministic circuit complexity $2^{\Omega(n)}$. The transformation complements previous results showing that public-key encryption and indistinguishability obfuscation that err on a noticeable fraction of inputs can be turned into ones that are often correct {\em for all inputs}. The technique relies on the idea of ``reverse randomization'' [Naor, CRYPTO 1989] and on Nisan-Wigderson style derandomization, which was previously used in cryptography to obtain non-interactive witness-indistinguishable proofs and commitment schemes [Barak, Ong, and Vadhan, CRYPTO 2003].
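    A hedged sketch of why coin tosses that are simultaneously good for all inputs exist, which is the step that Nisan-Wigderson style derandomization then makes constructive (our reading of the abstract; constants are illustrative):

```latex
% If the scheme is correct on each fixed input with probability 1/2 + \delta
% over its coins, then k-fold parallel repetition with majority decoding errs
% on that input with probability at most e^{-2\delta^2 k} (Chernoff). A union
% bound over all 2^n inputs gives
\[
  \Pr_{r}\bigl[\exists x \in \{0,1\}^n : \text{majority errs on } x\bigr]
  \;\le\; 2^{n} e^{-2\delta^2 k} \;<\; 1
  \quad \text{for } k = O(n/\delta^2),
\]
% so some single coin string is correct for all inputs; the derandomization
% assumption is then used to find such a string deterministically.
```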

    Statistically Sender-Private OT from LPN and Derandomization

    We construct a two-message oblivious transfer protocol with statistical sender privacy (SSP OT) based on the Learning Parity with Noise (LPN) assumption and a standard Nisan-Wigderson style derandomization assumption. Beyond being of interest in their own right, SSP OT protocols have proven to be a powerful tool for minimizing round complexity in a wide array of cryptographic applications, from proof systems, through secure computation protocols, to hard problems in statistical zero knowledge (SZK). The protocol is plausibly post-quantum secure. The only other constructions with plausible post-quantum security are based on the Learning with Errors (LWE) assumption. Lacking the geometric structure of LWE, our construction and analysis rely on a different set of techniques. Technically, we first construct an SSP OT protocol in the common random string model from LPN alone, and then derandomize the common random string. Most of the technical difficulty lies in the first step. Here we prove a robustness property of the inner-product randomness extractor against a certain type of linear splitting attacks. A caveat of our construction is that it relies on the so-called low-noise regime of LPN. This aligns with our current complexity-theoretic understanding of LPN, which is known to imply hardness in SZK only in the low-noise regime.
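    A toy illustration of the inner-product extractor whose robustness the analysis concerns (illustrative sketch only; the paper's actual use of it over LPN samples is more involved):

```python
# Toy illustration (not the paper's construction): the inner-product
# two-source extractor over GF(2), Ext(x, a) = <x, a> mod 2.
import secrets

def inner_product_extractor(x: list[int], a: list[int]) -> int:
    """Extract one bit as the GF(2) inner product of two bit vectors."""
    assert len(x) == len(a)
    return sum(xi & ai for xi, ai in zip(x, a)) % 2

# Usage: x plays the role of the weakly random secret/noise vector,
# a is a public random vector, e.g. taken from a common random string.
n = 128
x = [secrets.randbits(1) for _ in range(n)]
a = [secrets.randbits(1) for _ in range(n)]
print(inner_product_extractor(x, a))
```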

    On Strong Simulation and Composable Point Obfuscation

    The Virtual Black Box (VBB) property for program obfuscators provides a strong guarantee: anything computable by an efficient adversary given the obfuscated program can also be computed by an efficient simulator with only oracle access to the program. However, we know how to achieve this notion only for very restricted classes of programs. This work studies a simple relaxation of VBB: allow the simulator unbounded computation time, while still allowing only polynomially many queries to the oracle. We then demonstrate the viability of this relaxed notion, which we call Virtual Grey Box (VGB), in the context of fully composable obfuscators for point programs: it is known that, w.r.t. VBB, if such obfuscators exist, then there exist multi-bit point obfuscators (aka ``digital lockers'') and subsequently also very strong variants of encryption that are resilient to various attacks, such as key leakage and key-dependent messages. However, no composable VBB obfuscators for point programs have been shown. We show fully composable {\em VGB}-obfuscators for point programs under a strong variant of the Decision Diffie-Hellman assumption. We show that they suffice for the above applications, and even for extensions to the public-key setting as well as for encryption schemes with resistance to certain related-key attacks (RKA).
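    A toy sketch of the classic DDH-based point obfuscator in the style of Canetti (CRYPTO 1997), the kind of construction the paper's composable variant builds on; all parameters below are illustrative and insecure:

```python
# Toy point obfuscator: O(x) = (g^r, g^{r*x}). Hiding x relies on a strong
# DDH-type assumption; the fixed parameters here are for illustration only.
import secrets

P = 2**127 - 1  # a Mersenne prime; real instantiations use a prime-order group
G = 3           # illustrative base, not a verified generator

def obfuscate_point(x: int) -> tuple[int, int]:
    """Return (g^r, g^{r*x}) mod P for a fresh random exponent r."""
    r = secrets.randbelow(P - 2) + 1
    return pow(G, r, P), pow(G, r * x % (P - 1), P)

def evaluate(obf: tuple[int, int], y: int) -> bool:
    """True iff y is (almost surely) the hidden point x."""
    c1, c2 = obf
    return pow(c1, y, P) == c2

obf = obfuscate_point(42)
print(evaluate(obf, 42), evaluate(obf, 7))  # True False
```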

    On the Impossibility of Approximate Obfuscation and Applications to Resettable Cryptography

    The traditional notion of {\em program obfuscation} requires that an obfuscation $\tilde{f}$ of a program $f$ computes the exact same function as $f$, but beyond that, the code of $\tilde{f}$ should not leak any information about $f$. This strong notion of {\em virtual black-box} security was shown by Barak et al. (CRYPTO 2001) to be impossible to achieve for certain {\em unobfuscatable function families}. The same work raised the question of {\em approximate obfuscation}, where the obfuscated $\tilde{f}$ is only required to approximate $f$; that is, $\tilde{f}$ only agrees with $f$ on some input distribution. We show that, assuming {\em trapdoor permutations}, there exist families of {\em robust unobfuscatable functions} for which even approximate obfuscation is impossible. That is, obfuscation is impossible even if the obfuscated $\tilde{f}$ only agrees with $f$ with probability slightly more than $\frac{1}{2}$, on a uniformly sampled input (below $\frac{1}{2}$-agreement, the function obfuscated by $\tilde{f}$ is not uniquely defined). Additionally, we show that, assuming only one-way functions, we can rule out approximate obfuscation where $\tilde{f}$ is not allowed to err, but may refuse to compute $f$ with probability close to $1$. We then demonstrate the power of robust unobfuscatable functions by exhibiting new implications for resettable protocols that have so far been beyond our reach. Concretely, we obtain a new non-black-box simulation technique that reduces the assumptions required for resettably-sound zero-knowledge protocols to {\em one-way functions}, and also reduces the round complexity. We also present a new, simplified construction of simultaneously resettable zero-knowledge protocols that does not rely on collision-resistant hashing. Finally, we construct a three-message simultaneously resettable witness-indistinguishable {\em argument of knowledge} (with a non-black-box knowledge extractor). Our constructions are based on a special kind of ``resettable slots'' that are useful for a non-black-box simulator, but not for a resetting prover.
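    A one-line clarification of the $\frac{1}{2}$ threshold mentioned above (elementary observation, not from the paper):

```latex
% For Boolean f, approximate obfuscation asks for
\[
  \Pr_{x \leftarrow U_n}\bigl[\tilde{f}(x) = f(x)\bigr] \;\ge\; \tfrac{1}{2} + \epsilon ;
\]
% if the agreement drops below 1/2, then \tilde{f} agrees with the complement
% \neg f on a majority of inputs, so the function being "approximately
% obfuscated" is no longer uniquely determined.
```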

    Indistinguishability Obfuscation: from Approximate to Exact

    We show general transformations from subexponentially-secure approximate indistinguishability obfuscation (IO), where the obfuscated circuit agrees with the original circuit on a $1/2+\epsilon$ fraction of inputs, into exact indistinguishability obfuscation, where the obfuscated circuit and the original circuit agree on all inputs (except with negligible probability over the coin tosses of the obfuscator). As a step towards our results, which is of independent interest, we also obtain an approximate-to-exact transformation for functional encryption. At the core of our techniques is a method for ``fooling'' the obfuscator into giving us the correct answer, while preserving the indistinguishability-based security. This is achieved based on various types of secure computation protocols that can be obtained from different standard assumptions. Put together with the recent results of Canetti, Kalai and Paneth (TCC 2015), Pass and Shelat (ePrint 2015), and Mahmoody, Mohammed and Nematihaji (ePrint 2015), we show how to convert indistinguishability obfuscation schemes in various ideal models into exact obfuscation schemes in the plain model.
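    A hedged sketch of the amplification arithmetic that approximate-to-exact transformations of this kind rest on (our illustration, not the paper's exact mechanism; the paper routes evaluations through secure computation so that errors behave like random noise):

```latex
% If each of k evaluations of the approximate obfuscation (on independently
% re-randomized encodings of the same input) is correct with probability
% 1/2 + \epsilon, then majority decoding errs with probability at most
\[
  \Pr[\text{majority wrong}] \;\le\; e^{-2\epsilon^2 k},
\]
% which is negligible for k = \mathrm{poly}(\lambda); the secure-computation
% layer is what justifies treating the k evaluations as (close to) independent.
```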

    Succinct Arguments from Multi-Prover Interactive Proofs and their Efficiency Benefits

    \emph{Succinct arguments of knowledge} are computationally-sound proofs of knowledge for NP where the verifier's running time is independent of the time complexity $t$ of the nondeterministic NP machine $M$ that decides the given language. Existing succinct argument constructions are typically based on techniques that combine cryptographic hashing and probabilistically-checkable proofs (PCPs). Yet, even when instantiating these constructions with state-of-the-art PCPs, the prover needs $\Omega(t)$ space in order to run in quasilinear time (i.e., time $t \cdot \mathrm{poly}(k)$), regardless of the space complexity $s$ of the machine $M$. We say that a succinct argument is \emph{complexity preserving} if the prover runs in time $t \cdot \mathrm{poly}(k)$ and space $s \cdot \mathrm{poly}(k)$, and the verifier runs in time $|x| \cdot \mathrm{poly}(k)$, when proving and verifying that a $t$-time $s$-space random-access machine nondeterministically accepts an input $x$. Do complexity-preserving succinct arguments exist? To study this question, we investigate the alternative approach of constructing succinct arguments based on multi-prover interactive proofs (MIPs) and stronger cryptographic techniques:
    (1) We construct a one-round succinct MIP of knowledge, where each prover runs in time $t \cdot \mathrm{polylog}(t)$ and space $s \cdot \mathrm{polylog}(t)$, and the verifier runs in time $|x| \cdot \mathrm{polylog}(t)$.
    (2) We show how to transform any one-round MIP protocol into a succinct four-message argument (with a single prover), while preserving the time and space efficiency of the original MIP protocol; using our MIP protocol, this transformation yields a complexity-preserving four-message succinct argument. As a main tool for our transformation, we define and construct a \emph{succinct multi-function commitment} that (a) allows the sender to commit to a vector of functions in time and space complexity that are essentially the same as those needed for a single evaluation of the functions, and (b) ensures that the receiver's running time is essentially independent of the function. The scheme is based on fully-homomorphic encryption (and no additional assumptions are needed for our succinct argument).
    (3) In addition, we revisit the problem of \emph{non-interactive} succinct arguments of knowledge (SNARKs), where known impossibilities prevent solutions based on black-box reductions to standard assumptions. We formulate a natural (but non-standard) variant of homomorphic encryption with a \emph{homomorphism-extraction property}. We show that this primitive essentially allows us to squash our interactive protocol, while again preserving time and space efficiency, thereby obtaining a complexity-preserving SNARK. We further show that this variant is, in fact, implied by the existence of (complexity-preserving) SNARKs.
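    For contrast with the space-heavy hash-and-PCP approach described above, a toy Merkle-style commitment (standard background, not the paper's multi-function commitment): building the tree forces the committer to materialize the entire PCP-sized string, which is the $\Omega(t)$-space bottleneck the MIP route avoids.

```python
# Toy Merkle commitment over a list of leaves; illustrative only.
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Commit to all leaves; needs the whole (PCP-sized) string in memory."""
    layer = [_h(leaf) for leaf in leaves]
    while len(layer) > 1:
        if len(layer) % 2 == 1:
            layer.append(layer[-1])  # duplicate the last node on odd layers
        layer = [_h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

print(merkle_root([b"pcp symbol %d" % i for i in range(8)]).hex())
```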

    Weak Zero-Knowledge Beyond the Black-Box Barrier

    The round complexity of zero-knowledge protocols is a long-standing open question, yet to be settled under standard assumptions. So far, the question has appeared equally challenging for relaxations such as weak zero-knowledge and witness hiding. Protocols satisfying these relaxed notions under standard assumptions have at least four messages, just like full-fledged zero knowledge. The difficulty in improving round complexity stems from a fundamental barrier: none of these notions can be achieved in three messages via reductions (or simulators) that treat the verifier as a black box. We introduce a new non-black-box technique and use it to obtain the first protocols that cross this barrier under standard assumptions. Our main results are:
    - Weak zero-knowledge for NP in two messages, assuming quasipolynomially-secure fully-homomorphic encryption and other standard primitives (known from quasipolynomial hardness of Learning with Errors), as well as subexponentially-secure one-way functions.
    - Weak zero-knowledge for NP in three messages, under standard polynomial assumptions (following, for example, from fully-homomorphic encryption and factoring).
    We also give, under polynomial assumptions, a two-message witness-hiding protocol for any language $L \in \mathrm{NP}$ that has a witness encryption scheme. This protocol is also publicly verifiable. Our technique is based on a new {\em homomorphic trapdoor paradigm}, which can be seen as a non-black-box analog of the classic Feige-Lapidot-Shamir trapdoor paradigm.
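    For context, the classic Feige-Lapidot-Shamir trapdoor paradigm that the homomorphic trapdoor paradigm is described as a non-black-box analog of (standard background, not this paper's construction):

```latex
% The prover convinces the verifier of the disjunction
\[
  x \in L \ \vee\ \text{``}\tau\text{ is a valid trapdoor''},
\]
% where finding \tau is infeasible for a real (possibly cheating) prover, but
% the simulator can obtain it, e.g., by rewinding an initial phase of the
% protocol; soundness and zero knowledge then hinge on this asymmetry.
```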